On-line learning through simple perceptron learning with a margin
Authors
Abstract
We analyze a learning method that uses a margin kappa à la Gardner for simple perceptron learning. The method reduces to perceptron learning when kappa = 0 and to Hebbian learning as kappa goes to infinity. Nevertheless, we found that its generalization ability was superior to that of both the perceptron and the Hebbian methods at an early stage of learning. We analyzed the asymptotic behavior of the learning curve of this method through computer simulation and found that it is the same as for perceptron learning. We also investigated an adaptive method of controlling the margin.
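As an illustration of the rule being analyzed, the sketch below shows one plausible form of the margin update (a minimal reconstruction in Python by the editor; the field scaling, the learning rate, and the teacher setup are assumptions, not details taken from the paper): the student weight receives a Hebbian correction only when its aligned local field does not exceed the margin kappa, so kappa = 0 reduces to perceptron learning and a very large kappa makes every example trigger the Hebbian term.

import numpy as np

def margin_perceptron_update(w, x, y_teacher, kappa):
    # One on-line step of margin-kappa perceptron learning (illustrative sketch).
    # w: student weights (length N); x: input example; y_teacher: teacher label in {-1, +1};
    # kappa: margin. kappa = 0 gives perceptron learning; kappa -> infinity gives
    # Hebbian learning, because the update condition is then always satisfied.
    N = len(w)
    aligned_field = y_teacher * np.dot(w, x) / np.sqrt(N)   # student field aligned with the teacher label
    if aligned_field <= kappa:
        w = w + y_teacher * x / np.sqrt(N)                  # Hebbian correction only inside the margin
    return w

# Toy usage: on-line learning of a linearly separable rule defined by a teacher perceptron.
rng = np.random.default_rng(0)
N = 1000
teacher = rng.standard_normal(N)
student = np.zeros(N)
kappa = 0.5                      # illustrative value; the paper also studies adaptive margin control
for _ in range(20 * N):
    x = rng.standard_normal(N)
    y = 1.0 if teacher @ x >= 0 else -1.0
    student = margin_perceptron_update(student, x, y, kappa)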
Similar works
Statistical Mechanics of On-line Ensemble Teacher Learning through a Novel Perceptron Learning Rule
In ensemble teacher learning, ensemble teachers have only uncertain information about the true teacher, and this information is given by an ensemble consisting of an infinite number of ensemble teachers whose variety is sufficiently rich. In this learning, a student learns from an ensemble teacher that is iteratively selected randomly from a pool of many ensemble teachers. An interesting point ...
A New Approximate Maximal Margin Classification Algorithm
A new incremental learning algorithm is described which approximates the maximal margin hyperplane w.r.t. norm p ≥ 2 for a set of linearly separable data. Our algorithm, called ALMA_p (Approximate Large Margin algorithm w.r.t. norm p), takes O((p − 1)/(α² γ²)) corrections to separate the data with p-norm margin larger than (1 − α)γ, where γ is the (normalized) p-norm margin of the data. ALMA_p av...
On higher-order perceptron algorithms
A new algorithm for on-line learning linear-threshold functions is proposed which efficiently combines second-order statistics about the data with the "logarithmic behavior" of multiplicative/dual-norm algorithms. An initial theoretical analysis is provided suggesting that our algorithm might be viewed as a standard Perceptron algorithm operating on a transformed sequence of examples with impro...
On-line Learning with a Perceptron
We study on-line learning of a linearly separable rule with a simple perceptron. Training utilizes a sequence of uncorrelated, randomly drawn N-dimensional input examples. In the thermodynamic limit the generalization error after training with P such examples can be calculated exactly. For the standard perceptron algorithm it decreases like (N/P)^{1/3} for large P/N, in contrast to the faster (...
On-line Learning of Dichotomies
The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the number of examples P is equivalent to the learning time, since each example is presented only once. The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered. For a target that is a perceptron rule, the learning curve of the perceptron ...
Journal: Neural Networks: the official journal of the International Neural Network Society
Volume 17, Issue 2
Pages: -
Publication date: 2004